
    What Symptoms and How Long? An Interpretable AI Approach for Depression Detection in Social Media

    Depression is one of the most prevalent and serious mental illnesses, and it induces grave financial and societal ramifications. Depression detection is key to early intervention that mitigates those consequences. Such a high-stakes decision inherently necessitates interpretability. Although a few depression detection studies attempt to explain their decisions, these explanations misalign with the clinical depression diagnosis criteria, which are based on depressive symptoms. To fill this gap, we develop a novel Multi-Scale Temporal Prototype Network (MSTPNet). MSTPNet innovatively detects and interprets depressive symptoms as well as how long they last. Extensive empirical analyses show that MSTPNet outperforms state-of-the-art depression detection methods. The results also reveal new symptoms that go unnoted in the survey approach. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. This study contributes to the IS literature with a novel interpretable deep learning model for depression detection in social media.

    What Symptoms and How Long? An Interpretable AI Approach for Depression Detection in Social Media

    Depression is one of the most prevalent and serious mental illnesses, and it induces grave financial and societal ramifications. Depression detection is key to early intervention that mitigates those consequences. Such a high-stakes decision inherently necessitates interpretability. Although a few depression detection studies attempt to explain their decisions based on importance scores or attention weights, these explanations misalign with the clinical depression diagnosis criteria, which are based on depressive symptoms. To fill this gap, we follow the computational design science paradigm to develop a novel Multi-Scale Temporal Prototype Network (MSTPNet). MSTPNet innovatively detects and interprets depressive symptoms as well as how long they last. Extensive empirical analyses using a large-scale dataset show that MSTPNet outperforms state-of-the-art depression detection methods, with an F1-score of 0.851. The results also reveal new symptoms that go unnoted in the survey approach, such as sharing admiration for a different life. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. This study contributes to the IS literature with a novel interpretable deep learning model for depression detection in social media. In practice, our proposed method can be implemented on social media platforms to provide personalized online resources for users detected as depressed. Comment: 56 pages, 10 figures, 21 tables.
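
    The abstract gives no implementation details, but the prototype idea can be illustrated with a minimal sketch: score a user's post embeddings against learned symptom prototypes over sliding windows of several lengths, so each score pairs a symptom with a duration. The function name, window scales, and tensor shapes below are illustrative assumptions, not the actual MSTPNet architecture.

    # Minimal, hypothetical sketch: score post embeddings against learned symptom
    # prototypes at several temporal window scales (not the actual MSTPNet code).
    import torch
    import torch.nn.functional as F

    def prototype_match(post_embeddings, prototypes, scales=(1, 3, 7)):
        """post_embeddings: (T, d) post encodings ordered by time.
        prototypes: (K, d) learned symptom prototypes.
        Returns a (len(scales), K) tensor with the best similarity per scale."""
        scores = []
        for w in scales:
            # Average posts over a sliding window of w steps (one temporal scale).
            windows = post_embeddings.unfold(0, w, 1).mean(dim=-1)          # (T-w+1, d)
            sim = F.cosine_similarity(windows.unsqueeze(1),
                                      prototypes.unsqueeze(0), dim=-1)      # (T-w+1, K)
            scores.append(sim.max(dim=0).values)  # best match for each prototype
        return torch.stack(scores)

    # Toy usage: 30 days of 16-dim post embeddings, 5 symptom prototypes.
    emb, protos = torch.randn(30, 16), torch.randn(5, 16)
    print(prototype_match(emb, protos).shape)  # torch.Size([3, 5])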

    Understanding Health Video Engagement: An Interpretable Deep Learning Approach

    Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Understanding how health misinformation is transmitted is an urgent goal for researchers, social media platforms, health sectors, and policymakers seeking to mitigate those ramifications. Deep learning methods have been deployed to predict the spread of misinformation. While achieving state-of-the-art predictive performance, deep learning methods lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission in social media. Improving upon state-of-the-art interpretable methods, GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature as its value varies. We select features according to social exchange theory and evaluate GAN-PiWAD on 4,445 misinformation videos. The proposed approach outperformed strong benchmarks. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features driving viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning method that is generalizable to understanding other human decision factors. Our findings provide direct implications for social media platforms and policymakers to design proactive interventions to identify misinformation, control transmission, and manage infodemics. Comment: WITS 2021 Best Paper Award.
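
    For intuition, a "wide" linear component combined with attention-weighted fusion of multi-modal features can be sketched as below. The class name, layer sizes, and sigmoid output are illustrative assumptions, not the actual GAN-PiWAD architecture, whose GAN-based and piecewise components are not shown here.

    # Minimal, hypothetical sketch of a wide (linear) component combined with an
    # attention-weighted fusion of multi-modal features; not the GAN-PiWAD model.
    import torch
    import torch.nn as nn

    class WideAndAttentionDeep(nn.Module):
        def __init__(self, n_wide, d_modal, d_hidden=32):
            super().__init__()
            self.wide = nn.Linear(n_wide, 1)              # linear "wide" part
            self.attn = nn.Linear(d_modal, 1)             # attention over modalities
            self.deep = nn.Sequential(nn.Linear(d_modal, d_hidden), nn.ReLU(),
                                      nn.Linear(d_hidden, 1))

        def forward(self, wide_x, modal_x):
            # modal_x: (batch, n_modalities, d_modal), e.g. title, description, visual.
            w = torch.softmax(self.attn(modal_x), dim=1)  # (batch, n_modalities, 1)
            fused = (w * modal_x).sum(dim=1)              # attention-weighted fusion
            return torch.sigmoid(self.wide(wide_x) + self.deep(fused))

    model = WideAndAttentionDeep(n_wide=6, d_modal=16)
    print(model(torch.randn(4, 6), torch.randn(4, 3, 16)).shape)  # torch.Size([4, 1])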

    An Interpretable Deep Learning Approach to Understand Health Misinformation Transmission on YouTube

    Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Deep learning methods have been deployed to predict the spread of misinformation, but they lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning approach, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission in social media. GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features driving viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning approach that is generalizable to understanding human decisions. We provide direct implications for designing interventions to identify misinformation, control transmission, and manage infodemics.

    AN IMPROVEMENT TO E-COMMERCE RECOMMENDATION USING PRODUCT NETWORK ANALYSIS

    By helping consumers find the products they want, e-commerce recommendation saves the time spent viewing unnecessary web pages and increases revenue for e-commerce websites. Because of this significance, this paper concentrates on recommendation technology and improves the recommender system. To address the limitation of current techniques that consider only the association between two products, our model considers the whole product network and guides consumers to view products along an intended path. The objective is to maximize the revenue of the entire product network. An empirical study of yhd.com shows that our model is more effective than the current model.
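
    As an illustration of the product-network idea, the following sketch builds a small directed product graph and greedily extends a browsing path toward high expected revenue. The networkx graph, the transition probabilities, and the greedy rule are illustrative assumptions rather than the paper's actual optimization model.

    # Minimal, hypothetical sketch: greedily extend a browsing path through a
    # directed product graph toward high expected revenue. Illustrative only.
    import networkx as nx

    def recommend_path(G, start, length=3):
        """G: product graph with edge attribute 'p' (transition probability) and
        node attribute 'revenue'. Greedily follows the best expected-revenue edge."""
        path, current, prob = [start], start, 1.0
        for _ in range(length):
            candidates = [(v, d["p"]) for v, d in G[current].items() if v not in path]
            if not candidates:
                break
            nxt, p = max(candidates, key=lambda vp: vp[1] * G.nodes[vp[0]]["revenue"])
            path.append(nxt)
            prob *= p
            current = nxt
        return path, prob

    G = nx.DiGraph()
    G.add_nodes_from([("A", {"revenue": 10}), ("B", {"revenue": 25}),
                      ("C", {"revenue": 15}), ("D", {"revenue": 40})])
    G.add_weighted_edges_from([("A", "B", 0.5), ("A", "C", 0.4),
                               ("B", "D", 0.3), ("C", "D", 0.6)], weight="p")
    path, prob = recommend_path(G, "A")
    print(path, round(prob, 2))  # ['A', 'B', 'D'] 0.15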

    Discovering Barriers to Opioid Addiction Treatment from Social Media: A Similarity Network-Based Deep Learning Approach

    Opioid use disorder (OUD) refers to the physical and psychological reliance on opioids. OUD costs the US healthcare system $504 billion annually and poses significant mortality risk for patients. Understanding and mitigating the barriers to OUD treatment is a high-priority area. Current OUD treatment studies rely on surveys with low response rates because of social stigma. In this paper, we explore social media as a new data source for studying OUD treatments. We develop a SImilarity Network-based DEep Learning (SINDEL) approach to discover barriers to OUD treatment from patient narratives and address the challenge of morphs. SINDEL reaches an F1-score of 76.79%. Thirteen types of OUD treatment barriers were identified and verified by domain experts. This study contributes to the IS literature by proposing a novel deep-learning-based analytical approach with impactful implications for health practitioners.
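
    The abstract does not describe SINDEL's internals, but the role of a similarity network in handling morphs (altered word forms) can be illustrated with a toy sketch: link each morph to the canonical treatment term with the highest embedding similarity. The vocabulary, the 2-d vectors, and the function name are illustrative assumptions.

    # Toy sketch: link each morphed term to the canonical OUD treatment term with
    # the highest embedding cosine similarity. Illustrative only, not SINDEL itself.
    import numpy as np

    def resolve_morphs(terms, canonical, vocab, vectors):
        """Map each term to the canonical term with the highest cosine similarity."""
        v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
        idx = {w: i for i, w in enumerate(vocab)}
        return {t: max(canonical, key=lambda c: float(v[idx[t]] @ v[idx[c]]))
                for t in terms}

    # Toy 2-d vectors; real embeddings would be learned from patient narratives
    # (an assumption, since the abstract does not specify the embedding method).
    vocab = ["suboxone", "subs", "methadone", "done"]
    vectors = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
    print(resolve_morphs(["subs", "done"], ["suboxone", "methadone"], vocab, vectors))
    # {'subs': 'suboxone', 'done': 'methadone'}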

    Care for the Mind Amid Chronic Diseases: An Interpretable AI Approach Using IoT

    Health sensing for chronic disease management creates immense benefits for social welfare. Existing health sensing studies primarily focus on the prediction of physical chronic diseases. Depression, a widespread complication of chronic diseases, remains understudied. We draw on the medical literature to support depression prediction using motion sensor data. To incorporate human expertise into decision-making, safeguard trust in this high-stakes prediction, and ensure algorithmic transparency, we develop an interpretable deep learning model: Temporal Prototype Network (TempPNet). TempPNet is built upon emergent prototype learning models. To accommodate the temporal characteristics of sensor data and the progressive nature of depression, TempPNet differs from existing prototype learning models in its capability to capture the temporal progression of depression. Extensive empirical analyses using real-world motion sensor data show that TempPNet outperforms state-of-the-art benchmarks in depression prediction. Moreover, TempPNet interprets its predictions by visualizing the temporal progression of depression and its corresponding symptoms detected from sensor data. We further conduct a user study to demonstrate its superiority over the benchmarks in interpretability. This study offers an algorithmic solution for impactful social good: collaborative care of chronic diseases and depression in health sensing. Methodologically, it contributes to the extant literature with a novel interpretable deep learning model for depression prediction from sensor data. Patients, doctors, and caregivers can deploy our model on mobile devices to monitor patients' depression risks in real time. Our model's interpretability also allows human experts to participate in decision-making by reviewing the interpretation of prediction outcomes and making informed interventions. Comment: 39 pages, 12 figures.
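
    To make the sensor-side prototype idea concrete, here is a rough numpy sketch that slides a prototype pattern over a motion-sensor series and tracks where it matches best, hinting at when a pattern emerges. The function name, the synthetic signal, and the "drift" prototype are illustrative assumptions, not TempPNet itself.

    # Rough sketch: slide a prototype pattern over a motion-sensor series and track
    # where it matches best, hinting at when a pattern emerges. Not TempPNet itself.
    import numpy as np

    def prototype_distance_profile(signal, prototype):
        """signal: (T,) sensor readings; prototype: (w,) prototype pattern.
        Returns the Euclidean distance between the prototype and each window."""
        w = len(prototype)
        windows = np.lib.stride_tricks.sliding_window_view(signal, w)  # (T-w+1, w)
        return np.linalg.norm(windows - prototype, axis=1)

    rng = np.random.default_rng(0)
    signal = rng.normal(size=200)
    signal[120:140] += np.linspace(0, 2, 20)  # inject a drift late in the series
    prototype = np.linspace(0, 2, 20)         # a hypothetical "drift" prototype
    dist = prototype_distance_profile(signal, prototype)
    print(int(dist.argmin()))  # position of the best match (around 120)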

    Patient Dropout Prediction in Virtual Health: A Multimodal Dynamic Knowledge Graph and Text Mining Approach

    Virtual health has been acclaimed as a transformative force in healthcare delivery. Yet its dropout issue is critical, leading to poor health outcomes and increased health, societal, and economic costs. Timely prediction of patient dropout enables stakeholders to take proactive steps to address patients' concerns, potentially improving retention rates. In virtual health, the information asymmetries inherent in its delivery format, between different stakeholders, and across different healthcare delivery systems hinder the performance of existing predictive methods. To resolve those information asymmetries, we propose a Multimodal Dynamic Knowledge-driven Dropout Prediction (MDKDP) framework that learns implicit and explicit knowledge from doctor-patient dialogues and from the dynamic and complex networks of various stakeholders in both online and offline healthcare delivery systems. We evaluate MDKDP by partnering with one of the largest virtual health platforms in China. MDKDP improves the F1-score by 3.26 percentage points relative to the best benchmark. Comprehensive robustness analyses show that integrating stakeholder attributes, knowledge dynamics, and compact bilinear pooling significantly improves performance. Our work provides significant implications for healthcare IT by revealing the value of mining relations and knowledge across different service modalities. Practically, MDKDP offers a novel design artifact for virtual health platforms in patient dropout management.
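
    Compact bilinear pooling, which the abstract credits with part of the performance gain, can be sketched briefly: approximate the outer product of two modality vectors via count sketches and FFT-based circular convolution. The dimensions, seeds, and the toy "text" and "graph" vectors below are illustrative; this is not the MDKDP implementation.

    # Minimal sketch of compact bilinear pooling (count sketch + FFT): approximate
    # the outer product of two modality vectors in a low dimension. Illustrative
    # choices of dimensions and seeds; not the MDKDP implementation.
    import numpy as np

    def count_sketch(x, h, s, d):
        """Project x into d dimensions with hash indices h and random signs s."""
        y = np.zeros(d)
        np.add.at(y, h, s * x)
        return y

    def compact_bilinear_pool(x1, x2, d=64, seed=0):
        """Circular convolution (via FFT) of the two count sketches approximates
        the count sketch of the flattened outer product of x1 and x2."""
        rng = np.random.default_rng(seed)
        h1, h2 = rng.integers(0, d, x1.size), rng.integers(0, d, x2.size)
        s1, s2 = rng.choice([-1.0, 1.0], x1.size), rng.choice([-1.0, 1.0], x2.size)
        y1, y2 = count_sketch(x1, h1, s1, d), count_sketch(x2, h2, s2, d)
        return np.real(np.fft.ifft(np.fft.fft(y1) * np.fft.fft(y2)))

    # Toy usage: fuse a dialogue-text embedding with a knowledge-graph embedding.
    fused = compact_bilinear_pool(np.random.randn(128), np.random.randn(32))
    print(fused.shape)  # (64,)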

    Transparent Polynomial Delegation and Its Applications to Zero Knowledge Proof

    We present a new succinct zero knowledge argument scheme for layered arithmetic circuits without trusted setup. The prover time is $O(C + n \log n)$ and the proof size is $O(D \log C + \log^2 n)$ for a $D$-depth circuit with $n$ inputs and $C$ gates. The verification time is also succinct, $O(D \log C + \log^2 n)$, if the circuit is structured. Our scheme only uses lightweight cryptographic primitives such as collision-resistant hash functions and is plausibly post-quantum secure. We implement a zero knowledge argument system, Virgo, based on our new scheme and compare its performance to existing schemes. Experiments show that it only takes 53 seconds to generate a proof for a circuit computing a Merkle tree with 256 leaves, at least an order of magnitude faster than all other succinct zero knowledge argument schemes. The verification time is 50 ms, and the proof size is 253 KB, both competitive with existing systems. Underlying Virgo is a new transparent zero knowledge verifiable polynomial delegation scheme with logarithmic proof size and verification time. The scheme is in the interactive oracle proof model and may be of independent interest.
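
    For intuition only, the sketch below shows the simplest kind of transparent (hash-based, no trusted setup) commitment: Merkle-commit to a polynomial's evaluations over a small domain and open a single evaluation with an authentication path. It is a didactic toy under assumptions of my own choosing, not Virgo's verifiable polynomial delegation scheme, which achieves logarithmic proof size and verification time.

    # Didactic toy: a transparent (hash-based, no trusted setup) commitment to a
    # polynomial's evaluations via a Merkle tree, with a single-point opening.
    # This is NOT Virgo's verifiable polynomial delegation scheme.
    import hashlib

    H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

    def merkle_tree(leaves):
        levels = [[H(leaf) for leaf in leaves]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
        return levels  # levels[-1][0] is the root, i.e. the commitment

    def open_leaf(levels, i):
        path = []
        for level in levels[:-1]:
            path.append(level[i ^ 1])  # sibling at this level
            i //= 2
        return path

    def verify(root, leaf, i, path):
        node = H(leaf)
        for sibling in path:
            node = H(node, sibling) if i % 2 == 0 else H(sibling, node)
            i //= 2
        return node == root

    # Commit to p(x) = 3x^2 + 2x + 1 on the domain {0, ..., 7} and open p(5).
    p = lambda x: 3 * x * x + 2 * x + 1
    evals = [str(p(x)).encode() for x in range(8)]
    levels = merkle_tree(evals)
    print(verify(levels[-1][0], evals[5], 5, open_leaf(levels, 5)))  # True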